Results 1 - 3 of 3
1.
Proceedings of the 11th International Conference on Data Science, Technology and Applications (DATA): 245-256, 2022.
Article in English | Web of Science | ID: covidwho-2044128

ABSTRACT

In a recent official statement, Google highlighted the negative effects of fake reviews on review websites and specifically requested that companies not buy, and users not accept payment to provide, fake reviews (Google, 2019). Governmental authorities have also started acting against organisations that appear to have a high number of fake reviews on their apps (DigitalTrends, 2018; Gov UK, 2020; ACM, 2017). However, while the phenomenon of fake reviews is well known in industries such as online journalism and business and travel portals, it remains a difficult challenge in software engineering (Martens & Maalej, 2019). Fake reviews threaten the reputation of an organisation and devalue reviews as a source for gauging public opinion about brands. Negative fake reviews can confuse customers and cause a loss of sales, while positive fake reviews can yield wrong insights into real users' needs and requirements. Although fake reviews have been studied for some time, only a limited number of spam detection models are available for companies to protect their corporate reputation. Especially in times of the coronavirus pandemic, organisations need to put extra focus on their online presence and limit the negative input that harms their competitive position and can even lead to business loss. Using state-of-the-art features engineered from review texts, a spam detector based on supervised machine learning is derived in an experiment and performs well on the well-known Amazon Mechanical Turk dataset.
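A minimal sketch of the kind of supervised fake-review detector the abstract describes, assuming TF-IDF word/phrase features and a logistic-regression classifier; the file name, column names, and learner are illustrative placeholders, not the paper's exact feature set or model.

```python
# Hypothetical CSV with columns "review_text" and "label" (1 = fake, 0 = genuine).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

df = pd.read_csv("amt_fake_reviews.csv")  # placeholder path for the AMT-style dataset

X_train, X_test, y_train, y_test = train_test_split(
    df["review_text"], df["label"],
    test_size=0.2, random_state=42, stratify=df["label"])

# TF-IDF unigrams/bigrams stand in for the engineered review-text features
# mentioned in the abstract; a simple linear classifier serves as the learner.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```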

2.
3rd International Conference on Artificial Intelligence in HCI, AI-HCI 2022, Held as Part of the 24th HCI International Conference, HCII 2022; LNAI 13336: 387-404, 2022.
Article in English | Scopus | ID: covidwho-1877755

ABSTRACT

The Covid-19 pandemic has been a driving force for a substantial increase in online activity and transactions across the globe. As a consequence, cyber-attacks, particularly those leveraging email as the preferred attack vector, have also increased exponentially since Q1 2020. Despite this, email remains a popular communication tool. Previously, in an effort to reduce the amount of spam entering a user's inbox, many email providers started to incorporate spam filters into their products. However, many commercial spam filters rely on a human to train the filter, leaving a margin of risk if sufficient training has not occurred. Knowing this, hackers employ more targeted and nuanced obfuscation methods to bypass in-built spam filters. In response to this continued problem, there is a growing body of research on the use of machine learning techniques for spam filtering. In many cases, detection results have shown great promise, but they often still rely on human input to classify training datasets. In this study, we explore specifically the use of deep learning as a method of reducing the human input required for spam detection. First, we evaluate the efficacy of popular freeware spam detection methods, tools, and techniques. Next, we narrow down machine learning techniques to select the appropriate method for our dataset, and compare its accuracy with that of the freeware spam detection tools. Our results showed that our deep learning model, based on simple word embedding and global max pooling (SWEM-max), had higher accuracy (98.41%) than both Thunderbird (95%) and Mailwasher (92%), which are based on Bayesian spam filtering. Finally, we postulate whether this improvement is enough to accept the removal of human input in spam email detection. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
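A minimal sketch of an SWEM-max style classifier as named in the abstract (learned word embeddings followed by global max pooling over the sequence); the vocabulary size, sequence length, embedding dimension, and training call are placeholder assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 200, 128  # assumed hyperparameters

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),            # integer-encoded email tokens
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),     # simple word embeddings (SWEM)
    layers.GlobalMaxPooling1D(),                 # max over the sequence axis (SWEM-max)
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # spam / not-spam probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would use integer-encoded, padded email texts, e.g.:
# model.fit(x_train, y_train, validation_split=0.1, epochs=5, batch_size=32)
```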

3.
6th International Conference on Computer Science and Engineering, UBMK 2021: 383-388, 2021.
Article in English | Scopus | ID: covidwho-1741304

ABSTRACT

With the widespread use of social media to socialise in virtual environments, bad actors can now use these platforms to spread malicious activities such as hate speech, spam, and even phishing to very large crowds. Twitter in particular is suitable for these types of activities because it is one of the most common microblogging platforms, with millions of active users. Moreover, since the end of 2019, Covid-19 has changed the lives of individuals in many ways: while it increased social media usage due to extra free time, the number of cyber-attacks soared too. To prevent these activities, detection is a crucial phase. Thus, the main goal of this study is to review the state of the art in the detection of malicious content and the contribution of AI algorithms to detecting spam and scams effectively in social media. © 2021 IEEE
